OpenAI’s ChatGPT has quickly become an indispensable tool for many people. According to a recent study, 46% of respondents already use ChatGPT at work, and 80% describe it as a valuable work tool that will continue to shape the future of work.
Unfortunately, cybercriminals have also recognized the immense potential of ChatGPT as a productivity enhancer and have wasted no time in exploiting its security vulnerabilities.
One of the most distressing developments is the emergence of a thriving marketplace on the dark web, where compromised ChatGPT accounts are bought and sold. A cybersecurity research firm recently reported that more than 100,000 ChatGPT accounts have been compromised and are being traded on covert black markets. Beyond gaining unauthorized access to premium features, hackers can now conduct criminal acts such as disinformation campaigns, data breaches, and phishing attacks at a scale that was not possible before.
The stolen ChatGPT Plus accounts are sold online to individuals and businesses in regions where the platform is not permitted. These paid accounts offer higher message limits and eliminate waiting times, making them desirable to anyone who wants unrestricted access to the service.
Cybercriminals use a variety of techniques to exploit weaknesses in OpenAI’s systems. For example, Kasada’s research team discovered that hackers leverage GitHub repositories to gain unauthorized access to ChatGPT’s API, circumventing security controls and integrating the language model into unauthorized applications. In addition, automated credential stuffing and brute-force techniques can quickly identify and exploit weak account credentials.
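Credential stuffing succeeds largely because people reuse passwords that have already leaked in other breaches. One defensive counterpart, offered here as a minimal sketch rather than anything Kasada or OpenAI prescribes, is to screen passwords against a public breach corpus at signup or password change. This example uses the Pwned Passwords k-anonymity range API, so only the first five characters of the password’s SHA-1 hash ever leave the machine:

```python
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Look up a password in the Pwned Passwords corpus via the
    k-anonymity range API: only the first five hex characters of the
    SHA-1 hash are ever sent over the network."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    with urllib.request.urlopen(
        f"https://api.pwnedpasswords.com/range/{prefix}"
    ) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<HASH-SUFFIX>:<COUNT>"; a match means the
    # password appears in known breach data.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    n = breach_count("password123")  # deliberately weak example
    print(f"Seen in {n} breaches" if n else "Not found in corpus")
```

Rejecting or flagging any password with a nonzero count removes exactly the low-hanging fruit that stuffing lists rely on.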
So exactly how does a cybercriminal hack a ChatGPT account and sell it on the black market? Let’s look at the lifecycle of a hacked ChatGPT account.
- Launch the account takeover attack: Cybercriminals first gain unauthorized access to ChatGPT Plus accounts and GPT-4 APIs through techniques such as credential stuffing, in which username and password pairs leaked from earlier breaches are replayed at scale, or jailbreaking. By exploiting these security weaknesses, they break through the barriers protecting the APIs.
- Evade bot detection: Next, attackers evade the bot detection measures designed to block automated access to the system. They leverage various open-source and inexpensive tools, including CAPTCHA-solving services like 2Captcha, to bypass ChatGPT Plus account protections (a minimal sketch of the kind of defensive rate limiting this traffic has to contend with follows this list).
- Use the compromised ChatGPT account for malicious purposes: Once inside premium-level accounts, fraudsters take advantage of the absence of guardrails to generate fraudulent documents, forge accounts, and orchestrate scams. Sometimes these compromised accounts are sold directly to other individuals.
- Sell jailbroken ChatGPT accounts on the dark web or other forums: The final step of the lifecycle is the trading of jailbroken ChatGPT Plus accounts on the dark web and other online forums, platforms that cater to users seeking unrestricted access to ChatGPT’s premium features. Kasada’s threat intelligence has observed these compromised accounts selling for as little as $5 each, a discount of 50 to 75% off ChatGPT Plus’s legitimate $20 monthly subscription price.
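As noted in the bot detection step above, the simplest countermeasure defenders put in front of a login endpoint is a per-IP sliding-window rate limit. The sketch below is illustrative only (Python 3.10+, with thresholds invented for the example, not figures from Kasada’s research); commercial bot-management products correlate far more signals, such as TLS fingerprints, header ordering, and CAPTCHA-solve timing, precisely because stuffing tools rotate IPs to slip past naive limits like this one:

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds, not figures from the article: deny an IP more
# than 10 login attempts in any rolling 60-second window.
WINDOW_SECONDS = 60.0
MAX_ATTEMPTS = 10

_recent: defaultdict[str, deque] = defaultdict(deque)

def allow_login_attempt(ip: str, now: float | None = None) -> bool:
    """Sliding-window rate limiter for a login endpoint. Credential
    stuffing tools fire thousands of attempts per minute, so a per-IP
    cap forces attackers to slow down or burn through proxies."""
    now = time.monotonic() if now is None else now
    window = _recent[ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # drop attempts that aged out of the window
    if len(window) >= MAX_ATTEMPTS:
        return False  # deny: this IP looks automated
    window.append(now)
    return True

if __name__ == "__main__":
    # Simulate a stuffing burst: attempts 11 and 12 within the window are denied.
    results = [allow_login_attempt("203.0.113.7", now=float(i)) for i in range(12)]
    print(results)  # ten True values, then False, False
```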
The wider implications
The implications of compromised generative AI accounts extend beyond the immediate victims. Once hackers gain unauthorized access, they can read the query histories associated with these accounts, potentially exposing sensitive personal or corporate information. Cybercriminals may also change the account credentials, locking out the legitimate owner and opening the door to further fraud.
The risks associated with compromised accounts are particularly concerning for companies that rely on CAPTCHA technology as part of their cybersecurity measures. Despite CAPTCHAs being widely used, AI-assisted CAPTCHA-solving services have emerged, enabling sophisticated bots to bypass these protections more quickly and easily.
Addressing the risks and rewards of ChatGPT
To effectively tackle the threats associated with ChatGPT while still reaping its benefits, everyone in the ecosystem must take collective responsibility for security, starting with OpenAI.
- OpenAI has already taken significant steps by implementing robust security measures and adopting responsible data handling practices. OpenAI’s GPT and Safety Best Practices guides describe protective measures, including tools like the Moderation API, which can identify and warn against or block certain types of unsafe content (a hedged usage sketch follows this list).
- Government bodies and industry leaders are increasingly recognizing their role in regulating and safeguarding against potential abuses of AI. Collaborative conversations and initiatives are already underway to establish frameworks that protect users and businesses from cybercriminals seeking to exploit AI technologies.
- Enterprises, through their security teams, also bear responsibility for mitigating risks associated with ChatGPT. Educating employees about how the models work and the potential consequences of disclosing sensitive information is crucial. Training on data privacy and security practices helps employees understand the implications of their actions and prevent inadvertent data leaks. Additionally, security teams can restrict the types of prompts that can be sent to ChatGPT (a simple deny-pattern sketch also appears after this list) and harden the security of ChatGPT accounts and APIs, further fortifying protection against potential vulnerabilities.
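To make the Moderation API mentioned in the first bullet concrete, here is a minimal usage sketch. It assumes the v1-style openai Python SDK and an OPENAI_API_KEY in the environment; a production gateway would log which categories were flagged rather than returning a bare boolean:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def is_unsafe(text: str) -> bool:
    """Return True if OpenAI's Moderation API flags the text, so a
    gateway can warn or block before the prompt reaches the model."""
    response = client.moderations.create(input=text)
    return response.results[0].flagged

if __name__ == "__main__":
    prompt = "Summarize this press release for me."
    print("blocked" if is_unsafe(prompt) else "allowed")
```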
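And the prompt restrictions from the last bullet can start as something as simple as a deny-pattern screen applied before any text is forwarded to ChatGPT. The patterns below are hypothetical illustrations of what a security team might block (private keys, cloud credential shapes, PII, internal hostnames), not a vetted policy:

```python
import re

# Hypothetical patterns a security team might block before text is sent
# to ChatGPT. These are illustrative examples, not a vetted policy.
DENY_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b"),  # AWS access key IDs
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN shape
    re.compile(r"\.internal\.example\.com\b"),     # internal hostnames
]

def prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt matches any deny pattern, so the
    gateway can refuse to forward it to the ChatGPT API."""
    return not any(p.search(prompt) for p in DENY_PATTERNS)

assert prompt_allowed("Summarize this press release for me.")
assert not prompt_allowed("debug this: key AKIAABCDEFGHIJKLMNOP fails")
```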
However, striking the right balance between preventing malicious activity and preserving the platform's legitimate use cases remains a significant challenge. As the adoption of ChatGPT continues to accelerate, users and organizations must maintain a proactive stance against potential security threats and the various forms they can take. Through training, education and proactive measures, we can harness the immense potential of ChatGPT and other models while ensuring the highest standards of security and privacy for users and businesses alike.